Lecture - Rolls, Computational Neuroscience

Greg Detre

@11 on 4 February 2002

Prof. Rolls, 5 of 8

 

1-layer perceptron - having target outputs is biologically implausible

will find the least mean squares (LMS) error solution

need to be linearly separable

adaline = linear SLP (Widrow + Hoff) - adaptive linear element
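
To make the delta-rule points above concrete, here is a minimal NumPy sketch (my own illustration, not from the lecture; the learning rate and epoch count are arbitrary) of an adaline-style linear unit trained with the Widrow + Hoff rule. It learns AND (linearly separable) but not XOR, and it needs an explicit target for every output - the biologically implausible part.

import numpy as np

# Adaline / delta-rule sketch. A single linear unit y = w.x is trained with the
# Widrow-Hoff (LMS) rule dw = lr * (target - y) * x. Note the biologically awkward
# ingredient: every output unit needs its own explicit target.

def train_adaline(X, targets, lr=0.1, epochs=200):
    """Train one linear unit with the delta rule; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = w @ x                  # linear output (no threshold while learning)
            w += lr * (t - y) * x      # delta rule: error times input
    return w

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # last column = bias

# AND is linearly separable, so the LMS solution classifies it correctly...
w = train_adaline(X, np.array([0, 0, 0, 1], float))
print("AND:", (X @ w > 0.5).astype(int))    # -> [0 0 0 1]

# ...but XOR is not: an LMS fit exists, yet no threshold separates the classes.
w = train_adaline(X, np.array([0, 1, 1, 0], float))
print("XOR:", (X @ w > 0.5).astype(int))    # not [0 1 1 0] - needs recoding / hidden units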

 

brain doesn't suffer from linear separability issues because huge number of inputs

Rolls says that brain doesn't do logic in the way Minsky + Papert said

 

PA: p_max ∝ C (storage capacity scales with C, the number of inputs per neuron)
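
As a concrete picture of a pattern associator and why its capacity grows with C, here is a small sketch (mine, not the lecture's notation; the sizes, sparseness and the top-fraction thresholding at recall are all assumptions): Hebbian outer-product storage of CS-to-output pairs, recall from the CS alone, recall quality measured by correlation. Raising p while holding the number of inputs per neuron fixed makes the correlation fall, which is the sense in which p_max scales with C.

import numpy as np

# Hebbian pattern associator sketch. Each of the N_out output neurons has
# C = N_in modifiable synapses; p input->output pairs are stored with an
# associative (Hebbian) rule and recalled from the input (CS) alone.
rng = np.random.default_rng(1)

N_in, N_out, p = 200, 100, 20                    # assumed sizes
a = 0.1                                          # sparseness of the binary patterns

X = (rng.random((p, N_in)) < a).astype(float)    # conditioned-stimulus input patterns
Y = (rng.random((p, N_out)) < a).astype(float)   # output patterns forced by the UCS

W = Y.T @ X                                      # Hebbian outer-product learning

def recall(x, sparseness=a):
    """Dendritic sums W @ x, then keep roughly the top fraction of outputs firing."""
    h = W @ x
    k = int(round(sparseness * len(h)))
    thresh = np.sort(h)[-k]
    return (h >= thresh).astype(float)

sims = [np.corrcoef(recall(x), y)[0, 1] for x, y in zip(X, Y)]
print(f"mean recall correlation over {p} stored pairs: {np.mean(sims):.2f}")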

 

back projections - don't have 1 input per neuron

(except maybe in the cerebellum)

5% inhibitory vs excitatory

 

calculating the error is difficult biologically - well, expensive and not found in the brain

 

bio plausibility

local learning rule

target outputs are a no-no

calculating the error from the difference between target and actual firing

recoding is the trick to solving linearly inseparable problems (see the sketch below)

the brain doesn't channel through small numbers of hidden neurons
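
A sketch of the recoding point (my own illustration; the random expansion layer and its size are assumptions): XOR is not linearly separable on its raw two inputs, but after a fixed, unlearned expansion through many threshold units, a plain one-layer delta rule solves it - no error has to be propagated back into the recoding layer.

import numpy as np

# Recoding sketch: expand the input through a fixed layer of random threshold
# units (not learned), then train only a one-layer delta rule on the recoded
# representation. The linearly inseparable XOR problem becomes solvable.
rng = np.random.default_rng(2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([0, 1, 1, 0], float)                        # XOR targets

H = 50                                                    # size of the recoding layer
Wr = rng.normal(size=(H, 2))                              # fixed random weights
br = rng.normal(size=H)                                   # fixed random thresholds
Z = (X @ Wr.T + br > 0).astype(float)                     # expanded binary recoding
Z = np.hstack([Z, np.ones((len(Z), 1))])                  # add a bias input

w = np.zeros(H + 1)
for _ in range(2000):                                     # one-layer delta rule, as before
    for z, ti in zip(Z, t):
        w += 0.02 * (ti - w @ z) * z

print("XOR after recoding:", (Z @ w > 0.5).astype(int))   # expected: [0 1 1 0]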

 

brain has difficulty with discrete-step processing (like in Elman)

settle too fast

neuronal precision is limited - Rolls thinks synapses only really have 8 graded levels of resolution

brain doesn't like logic, or parity

operates in clamped condition

though there is some synaptic adaptation in forward inputs, allowing recurrent collaterals to settle unclamped

also clamped backprojection

so-called serial performance might result from slow constraint satisfaction

sensory receptor adaptation???

attractor nets are not so good at remembering continuous firing rate distributions - it's more efficient to use effectively binary firing rates

NMDA may do that - only alter the weights if the firing rate is high
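
To tie the attractor points together, here is a small autoassociator sketch (mine; the sizes, sparseness, top-fraction inhibition and the 8-level weight quantization are assumptions chosen to echo the notes above): binary sparse patterns stored with a covariance Hebb rule, weights crushed to 8 graded levels, and recall by letting the recurrent net settle from a degraded cue.

import numpy as np

# Autoassociative attractor net sketch. Binary (0/1) sparse patterns are stored
# with a covariance Hebb rule and recalled by letting the recurrent collaterals
# settle from a degraded cue. Weights are quantized to 8 levels to echo the
# limited-synaptic-resolution point.
rng = np.random.default_rng(3)

N, p, a = 200, 10, 0.1                            # neurons, stored patterns, sparseness
P = (rng.random((p, N)) < a).astype(float)        # binary firing-rate patterns

W = (P - a).T @ (P - a)                           # covariance Hebb rule
edges = np.linspace(W.min(), W.max(), 9)          # 8 bins -> 8 graded weight levels
centers = (edges[:-1] + edges[1:]) / 2
W = centers[np.clip(np.digitize(W, edges) - 1, 0, 7)]
np.fill_diagonal(W, 0.0)                          # no self-connections

def settle(state, steps=20, sparseness=a):
    """Iterative recall: each step, keep roughly the top fraction of neurons firing."""
    k = int(round(sparseness * len(state)))
    for _ in range(steps):
        h = W @ state
        thresh = np.sort(h)[-k]
        state = (h >= thresh).astype(float)
    return state

target = P[0]
cue = target.copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] = 1 - cue[flip]                         # degrade the cue (flip 10% of the bits)

recalled = settle(cue)
print("overlap cue -> stored:     ", np.corrcoef(cue, target)[0, 1].round(2))
print("overlap recalled -> stored:", np.corrcoef(recalled, target)[0, 1].round(2))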

 

reinforcement learning

a single reward/penalty signal for each neuron, or one for the whole net (Barto & Sutton, 1988)

slow learning

reward vector = just an MLP with noise

useful source of noise in the brain? probably
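
A sketch of the single-scalar-reward idea (my own simplification, in the spirit of reward/penalty learning rather than any specific published algorithm): the unit never sees a target or an error, only one broadcast reward; noise in its output provides the exploration, and a three-factor rule (reward minus baseline, times noise, times input) climbs the reward gradient slowly - hence the slow learning noted above.

import numpy as np

# Reward-modulated learning sketch: no target or error vector reaches the unit,
# only one scalar reward broadcast to every synapse. The output is perturbed by
# noise; if the reward beats a running baseline, weights move in the direction
# of the perturbation (a REINFORCE-style three-factor rule).
rng = np.random.default_rng(4)

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # inputs + bias
T = np.array([0.0, 0.0, 0.0, 1.0])        # the environment scores outputs against this,
                                          # but the learner only ever receives the score

w = np.zeros(3)
baseline = 0.0
lr, noise_sd = 0.1, 0.3

for step in range(5000):
    i = rng.integers(len(X))
    x, t = X[i], T[i]
    noise = rng.normal(0.0, noise_sd)
    y = w @ x + noise                     # perturbed output (exploration)
    reward = -(y - t) ** 2                # single scalar reward from the environment
    w += lr * (reward - baseline) * noise * x   # reward x noise x input
    baseline += 0.05 * (reward - baseline)      # slowly adapting reward baseline
    if step % 1000 == 0:
        mse = np.mean((X @ w - T) ** 2)
        print(f"step {step:4d}  mean squared error {mse:.3f}")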

 

Questions

CS vs UCS???

difference between 1-layer perceptron + PA??? delta rule (supervising target output = error rather than value)

delta rule - learning involves the error rather than subtracting sparseness

LMS error vs 'least mean modulus' error???

cable theory??? shunting???

would a 2-layer competitive network be useful???

how do rewards help in unsupervised nets???

www.cns.ox.ac.uk - GAs

trace rule for invariant object recognition???

CA3 - hippocampus???

mossy fibres to CA3 Ns??? cholinergic trick input???

RC membranes???

multi-stage PA as AA???